perm filename GUNKEL[S83,JMC] blob
sn#717197 filedate 1983-06-20 generic text, type C, neo UTF8
gunkel[s83,jmc] About Patrick Gunkel's proposed meeting
Patrick Gunkel proposes A CONFERENCE AND WHITE PAPER ON
"ARTIFICIAL INTELLIGENCE AND THE FUTURE OF AMERICA" in a document
dated 1983 May 25. In my opinion the proposed conference is
probably not a good idea, for the following reasons.
1. Artificial intelligence (AI) is indeed as potentially revolutionary
as Gunkel says. It is also important to preserve America's lead
in the field.
2. AI is in a state where some useful applications are
possible, and many companies are hoping to cash in. However,
the state of the science limits the applications that can be
developed now. For example, robotic servants await future
fundamental scientific discoveries. Let us compare artificial
intelligence with nuclear physics. In 1933 Rutherford could say
that he saw no possibility of nuclear energy for military use or
electric power. In 1939, when Fermi's 1934 experiments were correctly
interpreted as having exhibited fission, many scientists in many
countries immediately saw the possibility of a chain reaction
leading to bombs and power plants. There were differences of opinion
about whether a determined effort could develop bombs in time to
affect World War II. Thanks to Szilard's leadership, the U.S. was
the only country that drew the correct conclusion and mobilized
the necessary effort.
The situation in AI is quite different. There is no present
scientific basis for a Manhattan Project in AI, just as there was not before
fission was discovered. We cannot tell whether the situation
corresponds to that of 1938 or that of 1905 just after Einstein
published E = mc² as a consequence of the special theory of relativity.
(Perhaps the potential of nuclear energy wasn't really confirmed until
the masses of isotopes were known and mass defects were computed.)
AI differs from nuclear physics in another respect that may or may
not be relevant in the present context. Namely, so far as we know,
we don't have to observe a natural phenomenon like fission. Instead
AI will be a construction of the human mind; we have only to program
computers sufficiently well. In this respect AI is like mathematics
rather than like physics in that the phenomena to be understood,
the relation between situations and the actions required to achieve
goals, are logical in character. It also resembles engineering in
that the point is to build something that works.
3. Given these facts, I don't like to encourage the establishment
to start Manhattan Projects. They are too likely to resemble the
nuclear airplane project of the 1950s, which eventually collapsed in
failure and now inhibits new attempts even though the technology has advanced.
Moreover, the proportion of AI people working on applied projects is
already too high. There is no more effort going into basic theoretical
and experimental AI research than there was in 1970, and the basic
theory is advancing rather slowly.
In my opinion the financial state of AI is as follows:
a. Good students who want to work in AI can find places
in graduate school and financial support. Too much of this support
is attached to applied and pseudo-applied projects. These are
projects that promise practical results in a very few years but
almost never deliver. The demand for short-range payoff in
Government research support has had a more harmful effect in AI than
in almost any other field.
b. Good PhDs are getting jobs, but again they have to make
promises of quick payoff in order to get research support.
4. The highest financial priority for AI is more research
money for unsolicited proposals judged solely on scientific merit
and not on adherence to some plan. I regard the DARPA speech
recognition project as having been mainly harmful, because it
focused almost all American speech recognition research on
trying to satisfy the committee, thus stifling diversity. It was
the very best committee, but the results were still harmful. A
committee planned research effort in general AI would be even
more harmful no matter who was on the committee. Einstein and
Planck and Schroedinger and Heisenberg weren't planned, and such
scientists are what AI needs.
5. The second priority is for postdoctoral fellowships
that will enable the holders to sit and think or write programs
according to their own choice. Therefore, it would be best if
the fellowships were awarded nationally rather than given as
additional research associateships to existing principal investigators.
Some of them should be awarded to smart people trained in other
fields who want to transfer to AI. AI is an attractive field
and many people with PhDs in other fields want to transfer to it.
6. The third priority is bricks and mortar for computer
science departments and endowed chairs. Because computer science
has expanded recently and will continue to expand, it is far
behind more established fields in its facilities. Providing
computers is also important, but this need has been recognized
and is being met.
7. The current enthusiasm for AI in companies and
governments is based on the expectation of a quick payoff.
Unless some fundamental discoveries are accidentally made,
these hopes are likely to be disappointed and AI will be
regarded as a fad. I oppose establishing institutions that
require quick applied payoff. Scientific discoveries should
be the criterion for continuing support of research projects
and research institutions. AI should be regarded as genetics
was before genetic engineering appeared, or like fundamental
physics.
8. Someone should identify the basic research in AI
that is going on and figure out how to increase it.
Here are some comments about some particular points of
the Gunkel draft.
9. "Its purpose would be to exploit the present critical
moment to reshape the general conception and course of artificial
intelligence research in the United States, . . .". I don't
want my research reshaped by any conference, and I don't want
to take part in reshaping anyone else's. I prefer to influence
other people's research by publishing papers and not by getting
my hands on their sources of financial support.
10. "Those countries that lead the world in AI are apt
to lead the world in other terms as well: in industrial growth,
world trade, per capita wealth, quality of life, science and
technology, education, social and political evolution, military
power, and cultural progress". This is seriously misleading.
Human level AI would indeed have all these effects, but that doesn't
seem to be in the cards immediately. In the meantime, AI should
not be regarded as a cure for all problems. Each of the above
matters requires separate efforts.
11. "AI researchers should seek from the outset to achieve
not merely artificial intelligence but artificial humanity: 'mechanical'
forms of emotion, purpose, imagination, creativity, character,
conscience, and kindness". This is a very bad idea of Gunkel's.
We want tools, not artificial slaves who would eventually have
to be liberated. Fortunately, it isn't presently necessary to
harangue on the matter, since the possibility is far off. If it
were close, I would have to abandon my research and crusade against
Gunkel's recipe for disaster.
There are five more pages to comment on, but I'd rather
do research in AI. If the conference is held, I'll come lest
some bad grand plan come out of it. Of course, I would be pleased
if the conference were to recommend the measures proposed in
my points 4, 5 and 6.